Results 1 - 4 of 4
1.
Front Neurosci; 17: 1239764, 2023.
Article in English | MEDLINE | ID: mdl-37790587

ABSTRACT

Introduction: Hyperspectral imaging (HSI) has shown promise in the field of intra-operative imaging and tissue differentiation, as it can provide real-time information invisible to the naked eye whilst remaining label free. Previous iterations of intra-operative HSI systems have shown limitations: a large footprint limiting ease of use within the confines of a neurosurgical theater environment, a slow image acquisition time, or compromised spatial/spectral resolution in favor of improvements to the surgical workflow. Lightfield hyperspectral imaging is a novel technique with the potential to facilitate video-rate image acquisition whilst maintaining a high spectral resolution. Our pre-clinical and first-in-human studies (IDEAL 0 and 1, respectively) demonstrate the steps leading to the first in-vivo use of a real-time lightfield hyperspectral system in neuro-oncology surgery.

Methods: A lightfield hyperspectral camera (Cubert Ultris X50) was integrated into a bespoke imaging system setup so that it could be safely adopted into the open neurosurgical workflow whilst maintaining sterility. Our system allowed the surgeon to capture in-vivo hyperspectral data (155 bands, 350-1,000 nm) at 1.5 Hz. Following successful implementation in a pre-clinical setup (IDEAL 0), our system was evaluated during brain tumor surgery in a single patient to remove a posterior fossa meningioma (IDEAL 1). Feedback from the theater team was analyzed and incorporated into a follow-up design aimed at implementing an IDEAL 2a study.

Results: Focusing on our IDEAL 1 study results, hyperspectral information was acquired from the cerebellum and associated meningioma with minimal disruption to the neurosurgical workflow. To the best of our knowledge, this is the first demonstration of HSI acquisition with 100+ spectral bands at a frame rate above 1 Hz in surgery.

Discussion: This work demonstrated that a lightfield hyperspectral imaging system not only meets the design criteria and specifications outlined in our pre-clinical (IDEAL 0) study, but also that it can translate into clinical practice, as illustrated by a successful first-in-human study (IDEAL 1). This opens doors for further development and optimisation, given the increasing evidence that hyperspectral imaging can provide live, wide-field, and label-free intra-operative imaging and tissue differentiation.
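The reported acquisition parameters make it possible to estimate the raw bandwidth such a system must sustain. The sketch below computes an uncompressed data rate; only the 155 bands and the 1.5 Hz frame rate come from the abstract, while the 570x570 spatial grid and 12-bit depth are illustrative assumptions:

```python
def hsi_data_rate_mb(width, height, bands, bit_depth, fps):
    """Raw, uncompressed data rate in MB/s for a hyperspectral video stream."""
    bytes_per_cube = width * height * bands * bit_depth / 8
    return bytes_per_cube * fps / 1e6

# 155 bands at 1.5 Hz as reported; the spatial grid and bit depth below
# are assumptions for illustration, not specifications from the paper.
rate = hsi_data_rate_mb(570, 570, 155, 12, 1.5)
print(f"{rate:.1f} MB/s")
```

Even under these modest assumptions the stream exceeds 100 MB/s, which illustrates why real-time acquisition at video rate is a non-trivial engineering constraint.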

2.
J Med Imaging (Bellingham); 10(4): 046001, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37492187

ABSTRACT

Purpose: Hyperspectral imaging shows promise for surgical applications as a way to non-invasively provide spatially resolved spectral information. For calibration purposes, a white reference image of a highly reflective Lambertian surface should be obtained under the same imaging conditions. Standard white references are not sterilizable and so are unsuitable for surgical environments. We demonstrate the necessity of in situ white references and address it by proposing a novel, sterile, synthetic reference construction algorithm.

Approach: The use of references obtained at different distances from the subject and under different lighting conditions was examined. Spectral and color reconstructions were compared with standard measurements qualitatively and quantitatively, using ΔE and normalized RMSE, respectively. The algorithm forms a composite image from a video of a standard sterile ruler, whose imperfect reflectivity is compensated for. The reference is modeled as the product of independent spatial and spectral components, and a scalar factor accounting for gain, exposure, and light intensity. Evaluation of synthetic references against ideal but non-sterile references is performed using the same metrics alongside pixel-by-pixel errors. Finally, intraoperative integration is assessed through cadaveric experiments.

Results: Improper white balancing leads to increases in all quantitative and qualitative errors. Synthetic references achieve median pixel-by-pixel errors lower than 6.5% and produce reconstructions and errors similar to those of an ideal reference. The algorithm integrated well into the surgical workflow, achieving median pixel-by-pixel errors of 4.77% while maintaining good spectral and color reconstruction.

Conclusions: We demonstrate the importance of in situ white referencing and present a novel synthetic referencing algorithm. The algorithm is suitable for surgery while maintaining the quality of classical data reconstruction.
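For context, the synthetic reference described here feeds the standard flat-field calibration step, which converts raw sensor counts to reflectance using white and dark reference images. A minimal numpy sketch of that step (the array shapes and the `eps` guard are assumptions of this illustration, not details from the paper):

```python
import numpy as np

def calibrate_reflectance(raw, white, dark, eps=1e-8):
    """Flat-field correction: map raw sensor counts to reflectance using
    white and dark reference cubes captured under the same conditions."""
    return (raw - dark) / np.maximum(white - dark, eps)

# Toy cubes of shape (height, width, bands); real data would come from the camera.
raw = np.full((2, 2, 3), 150.0)
white = np.full((2, 2, 3), 200.0)
dark = np.full((2, 2, 3), 50.0)
reflectance = calibrate_reflectance(raw, white, dark)
```

Because `white` appears in the denominator, any error in the white reference propagates directly into every reconstructed spectrum, which is why the paper's in situ, sterile reference matters.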

3.
Int J Comput Assist Radiol Surg; 16(8): 1347-1356, 2021 Aug.
Article in English | MEDLINE | ID: mdl-33937966

ABSTRACT

PURPOSE: Image-guided surgery (IGS) is an integral part of modern neuro-oncology surgery. Navigated ultrasound provides the surgeon with reconstructed views of ultrasound data, but no commercial system presently permits its integration with other essential non-imaging-based intraoperative monitoring modalities, such as intraoperative neuromonitoring. Such a system would be particularly useful in skull base neurosurgery.

METHODS: We established functional and technical requirements of an integrated multi-modality IGS system tailored for skull base surgery with the ability to incorporate: (1) preoperative MRI data and associated 3D volume reconstructions, (2) real-time intraoperative neurophysiological data, and (3) live reconstructed 3D ultrasound. We created an open-source software platform to integrate with readily available commercial hardware. We tested the accuracy of the system's ultrasound navigation and reconstruction using a polyvinyl alcohol phantom model and simulated the use of the complete navigation system in a clinical operating room using a patient-specific phantom model.

RESULTS: Experimental validation of the system's navigated ultrasound component demonstrated accuracy of [Formula: see text] and a frame rate of 25 frames per second. Clinical simulation confirmed that system assembly was straightforward, could be achieved in a clinically acceptable time of [Formula: see text], and performed with a clinically acceptable level of accuracy.

CONCLUSION: We present an integrated open-source research platform for multi-modality IGS. The present prototype system was tailored for neurosurgery and met all minimum design requirements focused on skull base surgery. Future work aims to optimise the system further by addressing the remaining target requirements.


Subjects
Intraoperative Monitoring/methods; Neurosurgical Procedures/methods; Phantoms, Imaging; Skull Base/surgery; Surgery, Computer-Assisted/methods; Humans; Magnetic Resonance Imaging; Skull Base/diagnostic imaging; Software; Ultrasonography
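Navigation accuracy of the kind validated in this study is typically assessed by rigidly registering tracked fiducial points and measuring the residual error. A generic numpy sketch of that procedure (the Kabsch algorithm and an RMS error metric are standard tools in IGS validation, not the paper's actual code):

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (Kabsch algorithm) mapping the Nx3
    point set src onto dst; returns rotation R and translation t."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    # Correct an improper rotation (reflection) if one is returned.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

def fiducial_registration_error(src, dst, R, t):
    """RMS residual after registration, in the same units as the points (e.g. mm)."""
    residuals = dst - (src @ R.T + t)
    return np.sqrt(np.mean(np.sum(residuals ** 2, axis=1)))
```

On noiseless points the residual is zero to machine precision; with real tracked fiducials the RMS residual gives the kind of accuracy figure reported in the abstract.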
4.
Int J Comput Assist Radiol Surg; 14(7): 1167-1176, 2019 Jul.
Article in English | MEDLINE | ID: mdl-30989505

ABSTRACT

PURPOSE: Colorectal cancer is the third most common cancer worldwide, and early therapeutic treatment of precancerous tissue during colonoscopy is crucial for better prognosis and can be curative. Navigation within the colon and comprehensive inspection of the endoluminal tissue are key to successful colonoscopy but can vary with the skill and experience of the endoscopist. Computer-assisted interventions in colonoscopy can provide better support tools for mapping the colon to ensure complete examination and for automatically detecting abnormal tissue regions.

METHODS: We train the conditional generative adversarial network pix2pix to transform monocular endoscopic images to depth, which can be a building block in a navigational pipeline or be used to measure the size of polyps during colonoscopy. To overcome the lack of labelled training data in endoscopy, we propose to use simulation environments and to additionally train the generator and discriminator of the model on unlabelled real video frames in order to adapt to real colonoscopy environments.

RESULTS: We report promising results on synthetic, phantom, and real datasets and show that generative models outperform discriminative models when predicting depth from colonoscopy images, in terms of both accuracy and robustness towards changes in domains.

CONCLUSIONS: By training the discriminator and generator of the model on real images, we show that our model performs implicit domain adaptation, which is a key step towards bridging the gap between synthetic and real data. Importantly, we demonstrate the feasibility of training a single model to predict depth from both synthetic and real images without the need for explicit, unsupervised transformer networks mapping between the domains of synthetic and real data.


Subjects
Colonoscopy/methods; Colorectal Neoplasms/diagnosis; Image Processing, Computer-Assisted/methods; Imaging, Three-Dimensional/methods; Humans; Phantoms, Imaging
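The pix2pix model used in this study optimises a conditional-GAN objective: an adversarial term plus an L1 reconstruction term. A minimal numpy sketch of that generator loss (the function and parameter names are mine, and a real implementation would compute this inside a deep-learning framework):

```python
import numpy as np

def pix2pix_generator_loss(disc_on_fake, pred_depth, true_depth, lam=100.0, eps=1e-8):
    """pix2pix-style generator objective: an adversarial term that rewards
    fooling the discriminator, plus an L1 reconstruction term weighted by lam."""
    adversarial = -np.mean(np.log(disc_on_fake + eps))  # non-saturating GAN loss
    l1 = np.mean(np.abs(pred_depth - true_depth))
    return adversarial + lam * l1
```

The heavy L1 weight (100 in the original pix2pix formulation) keeps predicted depth close to ground truth where labels exist, while the adversarial term drives realism; for the unlabelled real frames described above, only adversarial-style supervision is available.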